Hermes: a Fast, Fault-Tolerant and Linearizable Replication Protocol
Today's datacenter applications are underpinned by datastores that are
responsible for providing availability, consistency, and performance. For high
availability in the presence of failures, these datastores replicate data
across several nodes. This is accomplished with the help of a reliable
replication protocol that keeps the replicas strongly consistent even when
faults occur. Strong consistency is preferred to weaker consistency models,
which cannot guarantee intuitive behavior for clients. Furthermore, to
accommodate high demand at real-time latencies,
datastores must deliver high throughput and low latency.
This work introduces Hermes, a broadcast-based reliable replication protocol
for in-memory datastores that provides both high throughput and low latency by
enabling local reads and fully-concurrent fast writes at all replicas. Hermes
couples logical timestamps with cache-coherence-inspired invalidations to
guarantee linearizability, avoid write serialization at a centralized ordering
point, resolve write conflicts locally at each replica (hence ensuring that
writes never abort) and provide fault-tolerance via replayable writes. Our
implementation of Hermes over an RDMA-enabled reliable datastore with five
replicas shows that Hermes consistently achieves higher throughput than
state-of-the-art RDMA-based reliable protocols (ZAB and CRAQ) across all write
ratios while also significantly reducing tail latency. At 5% writes, the tail
latency of Hermes is 3.6X lower than that of CRAQ and ZAB.
Comment: Accepted in ASPLOS 202
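The abstract's core mechanism, local reads plus invalidation-based writes ordered by logical timestamps, can be illustrated with a small sketch. This is not the authors' code: it assumes a reliable network, omits acknowledgments, failures, and write replay, and all names are made up for illustration.

```python
# Sketch of Hermes-style writes: a coordinator broadcasts an INValidation
# carrying a logical timestamp (version, node_id); replicas apply the
# higher timestamp, so concurrent writes resolve identically everywhere
# and never abort. A later VALidation re-enables local reads.

VALID, INVALID = "VALID", "INVALID"

class Replica:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}  # key -> [value, (version, node_id), state]

    def local_read(self, key):
        # Reads are served locally, but only from a VALID entry.
        value, ts, state = self.store[key]
        assert state == VALID, "entry is being written; retry"
        return value

    def recv_inv(self, key, new_value, ts):
        # Apply an invalidation only if its timestamp is lexicographically
        # higher; this orders conflicting writes without a central point.
        entry = self.store.get(key, [None, (0, 0), VALID])
        if ts > entry[1]:
            self.store[key] = [new_value, ts, INVALID]

    def recv_val(self, key, ts):
        # A validation with a matching timestamp re-enables local reads.
        entry = self.store[key]
        if ts == entry[1]:
            entry[2] = VALID

def write(replicas, coordinator, key, value):
    # Any replica can coordinate a write: pick a higher timestamp,
    # broadcast INV to all replicas, then (after acks, elided) VAL.
    old = coordinator.store.get(key, [None, (0, 0), VALID])
    ts = (old[1][0] + 1, coordinator.node_id)
    for r in replicas:
        r.recv_inv(key, value, ts)
    for r in replicas:
        r.recv_val(key, ts)

replicas = [Replica(i) for i in range(5)]
write(replicas, replicas[0], "x", 1)  # writes proceed at any replica
write(replicas, replicas[3], "x", 2)
print([r.local_read("x") for r in replicas])  # every replica reads 2
```

The lexicographic (version, node_id) comparison is what lets two replicas that receive conflicting invalidations in different orders still converge on the same winner.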
Content Conditioning and Distribution for Dynamic Virtual Worlds
Metaverses are three-dimensional virtual worlds where anyone can add
and script new objects. Metaverses today, such as Second Life, are
dull, lifeless, and stagnant because users can see and interact with
only a tiny region around them, rather than a large and immersive
world. The next-generation Sirikata metaverse server scales to support
large, complex worlds, even as it allows users to see and interact
with the entire world. However, enabling large worlds poses a new
challenge to graphical clients to display high-quality scenes quickly
over a network.
Arbitrary 3D content is often not optimized for real-time rendering,
limiting the ability of clients to display large scenes consisting of
hundreds or thousands of objects. We present the design and
implementation of Sirikata's automatic, unsupervised conversion
process that transforms 3D content into a format suitable for
real-time rendering while minimizing loss of quality. The resulting
progressive format includes a base mesh, allowing clients to quickly
display the model, and a progressive portion for streaming additional
detail as desired.
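The progressive format described above can be sketched as a container type: a small base mesh that renders immediately plus ordered refinement chunks applied as they stream in. The types and sample geometry here are hypothetical, not Sirikata's actual on-disk format.

```python
# Sketch of a progressive mesh container: clients display the base mesh
# at once and improve quality as refinement chunks arrive.

from dataclasses import dataclass

@dataclass
class ProgressiveMesh:
    base_vertices: list   # coarse geometry, shipped first
    refinements: list     # ordered detail chunks streamed on demand
    applied: int = 0      # how many refinement chunks are in use

    def current_detail(self):
        # Effective vertex count at the current level of detail.
        return len(self.base_vertices) + sum(
            len(chunk) for chunk in self.refinements[: self.applied])

    def stream_next(self):
        # Apply the next chunk as it arrives over the network.
        if self.applied < len(self.refinements):
            self.applied += 1

mesh = ProgressiveMesh(
    base_vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    refinements=[[(0.5, 0.5, 0)], [(0.2, 0.8, 0), (0.8, 0.2, 0)]])
print(mesh.current_detail())  # 3: the base mesh is usable right away
mesh.stream_next()
mesh.stream_next()
print(mesh.current_detail())  # 6: full detail once all chunks arrive
```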
3D models are large--often several megabytes in size. This poses a
challenge for online, interactive virtual worlds like Sirikata, where
3D content must be downloaded on-demand. When a client enters a scene
containing many objects and the models are not cached locally on the
client's device, it can take a long time to download, resulting in
poor visual fidelity. Deciding how to order downloads has a huge
impact on performance. Should a client download a higher texture
resolution for one model, or stream additional vertices for another?
Worse, underpowered clients might not be able to display a high
resolution mesh, resulting in wasted time downloading unneeded
content. Several metrics, such as the distance and scale of an object
in the scene or the camera angle of the observer, can be taken into
account when designing a scheduling algorithm. We present the design
and implementation of a framework for evaluating scheduling algorithms
for progressive meshes and we perform this evaluation on several
independent metrics and methods for combining metrics, including a
linear optimization algorithm. After a thorough evaluation, our
results show that a simple metric--solid angle--consistently
outperforms all other metrics.
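The winning metric can be sketched as a scheduler that ranks pending downloads by the solid angle each object subtends at the camera. This is an illustration, not the paper's evaluation framework; objects are approximated by bounding spheres, and the scene data is made up.

```python
# Sketch of solid-angle download scheduling: the object filling the
# largest portion of the view gains detail first.

import math

def solid_angle(radius, distance):
    # Solid angle (steradians) subtended by a sphere of the given radius
    # seen from the given distance; 2*pi would fill half the view.
    if distance <= radius:
        return 2 * math.pi  # camera is inside the bounding sphere
    return 2 * math.pi * (1 - math.sqrt(1 - (radius / distance) ** 2))

def schedule(objects, camera):
    # Order pending downloads by solid angle, largest first, so large or
    # nearby objects gain detail before small, distant ones.
    def angle(obj):
        return solid_angle(obj["radius"], math.dist(camera, obj["center"]))
    return sorted(objects, key=angle, reverse=True)

scene = [
    {"name": "tree",     "center": (50.0, 0.0, 0.0), "radius": 2.0},
    {"name": "building", "center": (40.0, 0.0, 0.0), "radius": 10.0},
    {"name": "pebble",   "center": (3.0, 0.0, 0.0),  "radius": 0.1},
]
order = [o["name"] for o in schedule(scene, camera=(0.0, 0.0, 0.0))]
print(order)  # the building dominates the view despite being farther
```

Note how the metric naturally folds together the distance and scale factors the abstract mentions: a nearby pebble still ranks below a distant building because the ratio radius/distance is what matters.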
Object Storage on CRAQ: High-throughput chain replication for read-mostly workloads
Massive storage systems typically replicate and partition data over many potentially-faulty components to provide both reliability and scalability. Yet many commercially deployed systems, especially those designed for interactive use by customers, sacrifice stronger consistency properties in the desire for greater availability and higher throughput. This paper describes the design, implementation, and evaluation of CRAQ, a distributed object-storage system that challenges this inflexible tradeoff. Our basic approach, an improvement on Chain Replication, maintains strong consistency while greatly improving read throughput. By distributing load across all object replicas, CRAQ scales linearly with chain size without increasing consistency coordination. At the same time, it exposes noncommitted operations for weaker consistency guarantees when this suffices for some applications, which is especially useful under periods of high system churn. This paper explores additional design and implementation considerations for geo-replicated CRAQ storage across multiple datacenters to provide locality-optimized operations. We also discuss multi-object atomic updates and multicast optimizations for large-object updates.
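CRAQ's read-throughput improvement over basic chain replication can be sketched in a few lines. This is not the CRAQ codebase: writes flow head to tail, each node may hold a dirty (uncommitted) version, and a node with a dirty copy asks the tail only which version is committed, a small metadata query, before replying.

```python
# Sketch of CRAQ-style apportioned reads on a replication chain.

class Node:
    def __init__(self):
        self.versions = {}  # key -> {version_number: value}
        self.clean = {}     # key -> latest committed version number

def write(chain, key, value):
    head = chain[0]
    version = head.clean.get(key, 0) + 1
    for node in chain:                    # propagate head -> tail
        node.versions.setdefault(key, {})[version] = value
    for node in reversed(chain):          # tail's commit ack flows back
        node.clean[key] = version

def read(node, tail, key):
    # Any replica serves reads, which is why throughput scales with
    # chain length instead of bottlenecking at the tail.
    known = node.versions.get(key, {})
    committed = node.clean.get(key, 0)
    if max(known, default=0) > committed:
        committed = tail.clean.get(key, 0)  # dirty copy: consult tail
    return known[committed]

chain = [Node() for _ in range(3)]
write(chain, "obj", "v1")
write(chain, "obj", "v2")
print([read(n, chain[-1], "obj") for n in chain])  # ['v2', 'v2', 'v2']
```

In basic Chain Replication only the tail may answer reads; here the tail is consulted only in the (transient) dirty case, so steady-state reads are fully local.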
UNSUPERVISED CONVERSION OF 3D MODELS FOR INTERACTIVE METAVERSES
A virtual-world environment becomes a truly engaging platform when users have the ability to insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. Sirikata, an open virtual world platform, has processed over 700 models using this method.
JavaScript in JavaScript (js.js): Sandboxing third-party scripts
Running on billions of today's computing devices, JavaScript has become a ubiquitous platform for deploying web applications. Unfortunately, an application developer who wishes to include a third-party script must enter into an implicit trust relationship with the third party, granting it unmediated access to its entire application content. In this paper, we present js.js, a JavaScript interpreter (which runs in JavaScript) that allows an application to execute a third-party script inside a completely isolated, sandboxed environment. An application can, at runtime, create and interact with the objects, properties, and methods available from within the sandboxed environment, giving it complete control over the third-party script. js.js supports the full range of the JavaScript language, is compatible with major browsers, and is resilient to attacks from malicious scripts. We conduct a performance evaluation quantifying the overhead of using js.js and present an example of using js.js to execute Twitter's Tweet Button API.
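The core idea, running untrusted code under an interpreter the host controls so every object and operation the guest touches is mediated, can be illustrated in miniature. This Python sketch is not js.js (which interprets full JavaScript in JavaScript); it evaluates only arithmetic expressions over host-approved names, but the security structure is analogous: the guest never reaches the real runtime, only what the host explicitly exposes.

```python
# Sketch of interposition-by-interpretation: walk the untrusted code's
# AST ourselves instead of calling eval(), rejecting anything outside a
# whitelist (calls, attribute access, imports all fail closed).

import ast
import operator

ALLOWED_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

def sandboxed_eval(source, exposed):
    def run(node):
        if isinstance(node, ast.Expression):
            return run(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_OPS:
            return ALLOWED_OPS[type(node.op)](run(node.left), run(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in exposed:
            return exposed[node.id]  # only names the host granted
        raise ValueError("forbidden construct: " + type(node).__name__)
    return run(ast.parse(source, mode="eval"))

print(sandboxed_eval("price * 2 + 1", {"price": 10}))  # 21
try:
    sandboxed_eval("__import__('os')", {})  # a Call node: rejected
except ValueError:
    print("escape attempt blocked")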